
    Harvesting vibrational energy with liquid-bridged electrodes: thermodynamics in mechanically and electrically driven RC-circuits

    We theoretically study a vibrating pair of parallel electrodes bridged by a (deformed) liquid droplet, a recently developed microfluidic device for harvesting vibrational energy. The device can operate with various liquids, including liquid metals, electrolytes, and ionic liquids. We numerically solve the Young-Laplace equation for all droplet shapes during a vibration period, from which the time-dependent capacitance follows; this serves as input for an equivalent circuit model. We first investigate two existing energy harvesters (with a constant and a vanishing bias potential, respectively), for which we resolve an open issue concerning their optimal electrode separations: as small as possible in the first case and as large as possible in the second. We then propose a new engine with a time-dependent bias voltage, with which the harvested work and power can be increased by orders of magnitude at low vibration frequencies and by factors of 2-5 at high frequencies, where frequencies are to be compared to the inverse RC time of the circuit.
    Comment: 9 pages, 6 figures
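    The equivalent-circuit picture above can be sketched numerically: a minimal toy model (not the paper's actual solver) in which a sinusoidally varying capacitance C(t) drives charge through a load resistor, and the energy dissipated in the resistor over one steady-state period stands in for the harvested work. All parameter values below are illustrative assumptions.

```python
import math

def harvested_energy(R, C0, dC, f, V_bias, periods=20, steps_per_period=2000):
    """Euler-integrate dQ/dt = (V_bias - Q/C(t)) / R for a sinusoidally
    varying capacitance C(t) = C0 + dC*sin(2*pi*f*t), and return the
    energy dissipated in the load resistor over the final period."""
    dt = 1.0 / (f * steps_per_period)
    Q = V_bias * C0                        # start near the equilibrium charge
    energy = 0.0
    n_total = periods * steps_per_period
    for n in range(n_total):
        t = n * dt
        C = C0 + dC * math.sin(2 * math.pi * f * t)
        I = (V_bias - Q / C) / R           # current through the circuit
        Q += I * dt
        if n >= n_total - steps_per_period:  # measure only the last (steady) period
            energy += I * I * R * dt
    return energy

# drive frequencies below and above 1/(R*C0) = 1e4 Hz for these toy values
E_low = harvested_energy(R=1e6, C0=1e-10, dC=5e-11, f=10.0, V_bias=1.0)
E_high = harvested_energy(R=1e6, C0=1e-10, dC=5e-11, f=1e5, V_bias=1.0)
```

    With these toy parameters both regimes yield positive harvested energy per cycle; the paper's contribution lies in how that energy depends on the drive frequency relative to the inverse RC time and on the choice of bias protocol.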

    Benchmarking optimization algorithms for auto-tuning GPU kernels

    Recent years have witnessed phenomenal growth in the application and capabilities of Graphics Processing Units (GPUs) due to their high parallel computation power at relatively low cost. However, writing a computationally efficient GPU program (kernel) is challenging, and generally only certain specific kernel configurations lead to significant increases in performance. Auto-tuning is the process of automatically optimizing software for highly efficient execution on a target hardware platform. Auto-tuning is particularly useful for GPU programming, as a single kernel requires re-tuning after code changes, for different input data, and for different architectures. However, the discrete and non-convex nature of the search space creates a challenging optimization problem. In this work, we investigate which algorithm produces the fastest kernels if the time budget for the tuning task is varied. We conduct a survey by performing experiments on 26 different kernel spaces, from 9 different GPUs, for 16 different evolutionary black-box optimization algorithms. We then analyze these results and introduce a novel metric based on the PageRank centrality concept as a tool for gaining insight into the difficulty of the optimization problem. We demonstrate that our metric correlates strongly with observed tuning performance.
    Comment: in IEEE Transactions on Evolutionary Computation, 202
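    A random-search baseline illustrates what "auto-tuning under a time budget" means in practice: sample configurations from a discrete search space until the budget runs out and keep the fastest. The search space, cost model, and names below are invented for illustration and are not from the paper's benchmark suite.

```python
import itertools
import random

def autotune(configs, measure, budget):
    """Random-search baseline: evaluate up to `budget` random configurations
    and return the best (fastest) one seen, together with its runtime."""
    best_cfg, best_time = None, float("inf")
    for cfg in random.sample(configs, min(budget, len(configs))):
        t = measure(cfg)
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time

# toy 'kernel' search space: thread-block sizes and an unroll factor
space = [{"block_x": bx, "block_y": by, "unroll": u}
         for bx, by, u in itertools.product([8, 16, 32], [4, 8, 16], [1, 2, 4])]

def fake_runtime(cfg):
    # toy non-convex cost model standing in for a real GPU measurement
    return abs(cfg["block_x"] - 16) + abs(cfg["block_y"] - 8) \
        + (cfg["unroll"] - 2) ** 2 + 1.0

random.seed(0)
cfg, t = autotune(space, fake_runtime, budget=27)
```

    With a budget covering the whole 27-point space this degenerates to exhaustive search; the paper's question is precisely which smarter algorithms beat this baseline when the budget covers only a small fraction of a much larger space.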

    Rocket: Efficient and Scalable All-Pairs Computations on Heterogeneous Platforms

    All-pairs compute problems apply a user-defined function to each combination of two items of a given data set. Although these problems present an abundance of parallelism, data reuse must be exploited to achieve good performance. Several researchers have considered this problem, resorting either to partial replication with static work distribution or to dynamic scheduling with full replication. In contrast, we present a solution that relies on hierarchical multi-level software-based caches to maximize data reuse at each level in the distributed memory hierarchy, combined with a divide-and-conquer approach to exploit data locality, hierarchical work-stealing to dynamically balance the workload, and asynchronous processing to maximize resource utilization. We evaluate our solution using three real-world applications (from digital forensics, localization microscopy, and bioinformatics) on different platforms (from a desktop machine to a supercomputer). Results show excellent efficiency and scalability when scaling to 96 GPUs, even obtaining super-linear speedups due to the distributed cache.
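    The data-reuse idea can be illustrated with a tiled traversal of the pair matrix: each tile of items is loaded once and reused against a whole block of partners, a single-machine analogue of the software caches described above. A minimal sketch with illustrative names and tiling, not the Rocket implementation:

```python
def all_pairs_tiled(items, f, tile=4):
    """Compute f(a, b) for every unordered pair, processing the pair matrix
    in tiles so each tile's items are fetched once and reused tile-size
    times, the locality pattern a software cache exploits."""
    n = len(items)
    results = {}
    for i0 in range(0, n, tile):
        block_i = items[i0:i0 + tile]          # 'cache' the row tile
        for j0 in range(i0, n, tile):
            block_j = items[j0:j0 + tile]      # 'cache' the column tile
            for di, a in enumerate(block_i):
                for dj, b in enumerate(block_j):
                    i, j = i0 + di, j0 + dj
                    if i < j:                  # each unordered pair exactly once
                        results[(i, j)] = f(a, b)
    return results

pairs = all_pairs_tiled(list(range(6)), lambda a, b: a * b)
```

    For n items this still performs n(n-1)/2 function applications, but each item is fetched O(n/tile) times instead of O(n) times, which is what makes distributed caching pay off.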

    The Oceanographic Multipurpose Software Environment (OMUSE v1.0)

    In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). OMUSE aims to provide a homogeneous environment for existing or newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales can be easily designed. Rapid development of simulation models is made possible through the creation of simple high-level scripts. The low-level core of the abstraction in OMUSE is designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides. Reproducibility in numerical experiments is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP (Parallel Ocean Program). The uniform access to the codes' simulation state and the extensive automation of data transfer and conversion operations aid the implementation of model couplings. We discuss the types of couplings that can be implemented using OMUSE. We also present example applications that demonstrate the straightforward model initialization and the concurrent use of data analysis tools on a running model. We give examples of multiscale and multiphysics simulations by embedding a regional ocean model into a global ocean model and by coupling a surface wave propagation model with a coastal circulation model.
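    The coupling style this enables, evolving each component model to a synchronization time and exchanging state through a uniform interface, can be caricatured in a few lines. The classes and methods below are hypothetical stand-ins loosely modeled on the AMUSE/OMUSE `evolve_model` convention, not the real OMUSE API:

```python
# Hypothetical toy components -- NOT the real OMUSE interfaces.
class ConceptualOcean:
    """Stand-in for a coarse ocean model exposing a uniform interface."""
    def __init__(self):
        self.model_time = 0.0
        self.sst = 15.0                    # toy sea-surface temperature (deg C)
    def evolve_model(self, t_end):
        while self.model_time < t_end:
            self.model_time += 1.0         # one toy 'day' per step
            self.sst += 0.01               # toy warming trend

class ConceptualWaves:
    """Stand-in for a wave model forced by the ocean state."""
    def __init__(self):
        self.wave_height = 1.0
    def update_forcing(self, sst):
        self.wave_height = 1.0 + 0.05 * (sst - 15.0)

ocean, waves = ConceptualOcean(), ConceptualWaves()
for t_end in range(1, 11):                 # couple once per model day
    ocean.evolve_model(float(t_end))
    waves.update_forcing(ocean.sst)        # one-way data exchange at each sync
```

    The framework's value is that the data transfer and unit conversion hidden in `update_forcing` are automated, and either toy class could be swapped for a production code behind the same calls.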

    A spatial column-store to triangulate the Netherlands on the fly

    3D digital city models, important for urban planning, are currently constructed from massive point clouds obtained through airborne LiDAR (Light Detection and Ranging). They are semantically enriched with information obtained from auxiliary GIS data, such as cadastral data, which contains information about the boundaries of properties, road networks, rivers, lakes, etc. Technical advances in LiDAR data acquisition systems have made possible the rapid acquisition of high-resolution topographical information for an entire country. Such data sets are now reaching the trillion-point barrier. To cope with this data deluge and provide up-to-date 3D digital city models on demand, current geospatial management strategies should be rethought. This work presents a column-oriented Spatial Database Management System which provides in-situ data access, effective data skipping, efficient spatial operations, and interactive data visualization. Its efficiency and scalability are demonstrated using a dense LiDAR scan of the Netherlands consisting of 640 billion points and the latest cadastral information, and compared with PostGIS.
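    Data skipping, one of the techniques listed, can be sketched with per-chunk min/max bounding boxes (zone maps): a range query touches only the chunks whose box overlaps the query window. A toy sketch of the idea, not the system's actual implementation:

```python
def build_zone_maps(points, chunk=4):
    """Per-chunk min/max bounding boxes: the small aggregates that let a
    column store skip chunks that cannot intersect a query rectangle."""
    zones = []
    for i in range(0, len(points), chunk):
        c = points[i:i + chunk]
        xs, ys = [p[0] for p in c], [p[1] for p in c]
        zones.append((i, min(xs), max(xs), min(ys), max(ys)))
    return zones

def range_query(points, zones, chunk, x0, x1, y0, y1):
    """Scan only chunks whose bounding box overlaps the query rectangle."""
    hits, scanned = [], 0
    for start, xmin, xmax, ymin, ymax in zones:
        if xmax < x0 or xmin > x1 or ymax < y0 or ymin > y1:
            continue                       # skipped without touching the data
        scanned += 1
        for p in points[start:start + chunk]:
            if x0 <= p[0] <= x1 and y0 <= p[1] <= y1:
                hits.append(p)
    return hits, scanned

pts = [(float(i), float(i % 5)) for i in range(16)]   # toy, spatially ordered points
zmaps = build_zone_maps(pts, chunk=4)
hits, scanned = range_query(pts, zmaps, 4, 4.0, 7.0, 0.0, 5.0)
```

    The technique only pays off when the data are spatially clustered, which is why the storage order of the point cloud matters as much as the index itself.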

    Creating a reusable cross-disciplinary multi-scale and multi-physics framework: From AMUSE to OMUSE and beyond

    Here, we describe our efforts to create a multi-scale and multi-physics framework that can be retargeted across different disciplines. Currently, we have implemented our approach in the astrophysical domain, for which we developed AMUSE (github.com/amusecode/amuse), and generalized it to the oceanographic and climate sciences, which led to the development of OMUSE (bitbucket.org/omuse). The objective of this paper is to document the design choices that led to the successful implementation of these frameworks, as well as the future challenges in applying this approach to other domains.

    Global sensitivity analysis in hydrological modeling: Review of concepts, methods, theoretical framework, and applications

    Sensitivity analysis (SA) aims to identify the key parameters that affect model performance, and it plays important roles in model parameterization, calibration, optimization, and uncertainty quantification. However, the increasing complexity of hydrological models means that a large number of parameters need to be estimated. To better understand how these complex models work, efficient SA methods should be applied before hydrological models are put to use. This study provides a comprehensive review of global SA methods in the field of hydrological modeling. The common definitions of SA and the typical categories of SA methods are described. A wide variety of global SA methods have been introduced to provide a more efficient evaluation framework for hydrological modeling. We review, analyze, and categorize research into global SA methods and their applications, with an emphasis on work accomplished in the hydrological modeling field. The advantages and disadvantages of each method are also discussed and summarized. An application framework and the typical practical steps involved in SA for hydrological modeling are outlined. Further discussions cover several important and often overlooked topics, including the relationship between parameter identification, uncertainty analysis, and optimization in hydrological modeling; how to deal with correlated parameters; and time-varying SA. Finally, conclusions and guidance recommendations on SA in hydrological modeling are provided, along with a list of important future research directions that may facilitate more robust analyses when assessing hydrological modeling performance.
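    As a concrete instance of the variance-based global SA methods such reviews cover, a pick-freeze Monte Carlo estimate of first-order Sobol indices can be sketched as follows. This is a toy estimator on independent uniform inputs for illustration only, not a method recommendation from the review:

```python
import random

def sobol_first_order(model, n_params, n_samples=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices,
    S_i = (E[Y * Y_i] - f0^2) / Var(Y), with all inputs sampled
    independently and uniformly on [0, 1]."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    yA = [model(x) for x in A]
    f0 = sum(yA) / n_samples
    var = sum((y - f0) ** 2 for y in yA) / n_samples
    S = []
    for i in range(n_params):
        # B with column i taken from A: shares only parameter i with sample A
        ABi = [b[:i] + [a[i]] + b[i + 1:] for a, b in zip(A, B)]
        yABi = [model(x) for x in ABi]
        S.append((sum(ya * yb for ya, yb in zip(yA, yABi)) / n_samples
                  - f0 * f0) / var)
    return S

# toy model whose output is dominated by the first parameter
S = sobol_first_order(lambda x: x[0] + 0.1 * x[1], n_params=2)
```

    For this toy model the analytical indices are S_0 = 1/1.01 ≈ 0.99 and S_1 ≈ 0.01, so the estimator correctly flags the first parameter as dominant; hydrological applications replace the lambda with a full model run, which is why sample-efficient designs matter.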

    2012 ACCF/AHA/ACP/AATS/PCNA/SCAI/STS guideline for the diagnosis and management of patients with stable ischemic heart disease

    The recommendations listed in this document are, whenever possible, evidence based. An extensive evidence review was conducted as the document was compiled through December 2008. Repeated literature searches were performed by the guideline development staff and writing committee members as new issues were considered. New clinical trials published in peer-reviewed journals and articles through December 2011 were also reviewed and incorporated when relevant. Furthermore, because of the extended development time period for this guideline, peer review comments indicated that the sections focused on imaging technologies required additional updating, which occurred during 2011. Therefore, the evidence review for the imaging sections includes published literature through December 2011.